---
title: Console
description: The NextGen DataRobot Console provides critical management, monitoring, and governance features in a refreshed, modern user interface, familiar to users of MLOps features in DataRobot Classic.
section_name: NextGen Console
maturity: open-public-preview

---

# Console {: #console }

!!! info "Availability information"
    The NextGen Console is on by default.

    **Feature flag:** Enable Console

The NextGen DataRobot Console provides critical [management](#management), [monitoring](#monitoring), and [governance](#governance) features in a refreshed, modern user interface, familiar to users of [MLOps features in DataRobot Classic](mlops/index):

![](images/nxt-console-inventory.png)

This updated user interface provides a seamless transition from model experimentation and registration&mdash;in the NextGen [Workbench](wb-overview) and [Registry](nxt-registry/index)&mdash;to model monitoring and management through deployments in Console, all while maintaining the user experience you are accustomed to. This document provides links to the DataRobot Classic documentation for the features you can find in the NextGen experience. 

## Management {: #management }

To learn more about the NextGen Console model management settings, you can review the following DataRobot Classic documentation:

![](images/nxt-console-settings.png)

| Topic                                                                        | Describes                                                      |
|------------------------------------------------------------------------------|----------------------------------------------------------------|
| [Deployment inventory](deploy-inventory){ target=_blank }                    | View and access deployments and deployment management options. |
| [Set up service health monitoring](service-health-settings){ target=_blank } | Enable [segmented analysis](deploy-segment) to assess service health, data drift, and accuracy statistics filtered by unique segment attributes and values. |
| [Set up data drift monitoring](data-drift-settings){ target=_blank }         | Enable [data drift monitoring](data-drift) on a deployment's Data Drift Settings tab. |
| [Set up accuracy monitoring](accuracy-settings){ target=_blank }             | Enable [accuracy monitoring](deploy-accuracy) on a deployment's Accuracy Settings tab. |
| [Set up fairness monitoring](fairness-settings){ target=_blank }             | Enable [fairness monitoring](mlops-fairness) on a deployment's Fairness Settings tab. |
| [Set up humility rules](humility-settings){ target=_blank }                  | Enable [humility monitoring](humble) by creating rules that enable models to recognize, in real time, when they make uncertain predictions or receive data they have not seen before. |
| [Configure retraining](retraining-settings){ target=_blank }                 | Enable [Automated Retraining](set-up-auto-retraining) for a deployment by defining the general retraining settings and then creating retraining policies. |
| [Set up retraining policies](set-up-auto-retraining){ target=_blank }        | Establish retraining policies to maintain model performance after deploying. |
| [Configure challengers](challengers-settings){ target=_blank }               | Enable [challenger comparison](challengers) by configuring a deployment to store prediction request data at the row level and replay predictions on a schedule. |
| [Review predictions settings](predictions-settings){ target=_blank }         | Review the Predictions Settings tab to view details about your deployment's inference data. |
| [Enable data export](data-export-settings){ target=_blank }                  | Enable [data export](data-export) to compute and monitor custom business or performance metrics. | 
| [Set up custom metrics monitoring](custom-metrics-settings){ target=_blank } | Enable [custom metrics](custom-metrics) monitoring by defining the "at risk" and "failing" thresholds for the custom metrics you created. |
| [Set prediction intervals for time series deployments](predictions-settings#set-prediction-intervals-for-time-series-deployments){ target=_blank } | Enable [prediction intervals](ts-predictions#prediction-preview) in the prediction response for deployed time series models. |
| [Replace deployed models](deploy-replace){ target=_blank }                   | Replace the model used for a deployment. |
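
If you prefer to script these settings, the DataRobot Python client exposes deployment management methods that mirror several of the tabs above. The following is a minimal sketch, assuming placeholder values for the endpoint, API token, deployment ID, and replacement model ID:

```python
# A minimal sketch of scripting deployment management with the DataRobot
# Python client (pip install datarobot). The token and IDs are placeholders.
import datarobot as dr
from datarobot.enums import MODEL_REPLACEMENT_REASON

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

deployment = dr.Deployment.get(deployment_id="YOUR_DEPLOYMENT_ID")

# Enable target and feature drift tracking (see "Set up data drift monitoring").
deployment.update_drift_tracking_settings(
    target_drift_enabled=True,
    feature_drift_enabled=True,
)

# Replace the deployed model and record why (see "Replace deployed models").
deployment.replace_model("YOUR_NEW_MODEL_ID", MODEL_REPLACEMENT_REASON.ACCURACY)
```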

## Monitoring {: #monitoring }

To learn more about the NextGen Console model monitoring features, you can review the following DataRobot Classic documentation:

![](images/nxt-console-accuracy.png)

| Topic                                                  | Describes                                           |
|--------------------------------------------------------|-----------------------------------------------------|
| [Deployments](deploy-inventory){ target=_blank }       | View the deployment inventory's service health lens for monitoring status indicators and prediction activity information. |
| [Deployment overview](dep-overview){ target=_blank }   | View an overview of the deployment's content and history. |
| [Notifications](deploy-notifications){ target=_blank } | Configure deployment notifications and monitoring.  |
| [Service Health](service-health){ target=_blank }      | Track model-specific deployment latency, throughput, and error rate. |
| [Data Drift](data-drift){ target=_blank }              | Monitor changes in the distribution of a deployment's prediction data relative to its training data. |
| [Accuracy](deploy-accuracy){ target=_blank }           | Analyze performance of a model over time. |
| [Challenger Models](challengers){ target=_blank }      | Compare model performance post-deployment.  |
| [Usage](deploy-usage){ target=_blank }                 | Track the progress of prediction processing used in accuracy, data drift, and predictions-over-time analysis. |
| [Data Export](data-export){ target=_blank }            | Export a deployment's stored prediction data, actuals, and training data to compute and monitor custom business or performance metrics. |
| [Custom Metrics](custom-metrics){ target=_blank }      | Create and monitor up to 25 custom business or performance metrics. |
| [Segmented analysis](deploy-segment){ target=_blank }  | Track attributes for segmented analysis of training data and predictions. |
| [MLOps agent](mlops-agent/index){ target=_blank }      | Monitor remote models.  |
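
The monitoring statistics surfaced in Console can also be retrieved programmatically. The following is a minimal sketch using the DataRobot Python client, assuming placeholder values for the API token and deployment ID:

```python
# A minimal sketch of reading monitoring statistics with the DataRobot
# Python client; the token and deployment ID are placeholders.
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

deployment = dr.Deployment.get(deployment_id="YOUR_DEPLOYMENT_ID")

# Service health: request volume, latency, and error-rate metrics.
service_stats = deployment.get_service_stats()
print(service_stats.metrics)

# Target drift relative to the training baseline.
target_drift = deployment.get_target_drift()
print(target_drift.drift_score)

# Accuracy metrics, available once actuals have been submitted.
accuracy = deployment.get_accuracy()
print(accuracy.metrics)
```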

## Governance {: #governance }

To learn more about the NextGen Console model governance settings, you can review the following DataRobot Classic documentation:

![](images/nxt-console-humility.png)

| Topic                                                       | Describes                      |
|-------------------------------------------------------------|--------------------------------|
| [Deployment governance lens](gov-lens){ target=_blank }     | View the deployment inventory's governance lens for humility and fairness status indicators, ownership and infrastructure information, and deployment history. |
| [Deployment history](dep-overview#history){ target=_blank } | View the deployment's history. |
| [Deployment approval](dep-admin){ target=_blank }           | Define a deployment approval workflow to control the creation of new deployments or updates to existing deployments. |
| [Notifications](deploy-notifications){ target=_blank }      | Configure deployment notifications and monitoring.  |
| [Humility rules](humility-settings){ target=_blank }        | Monitor deployments to recognize, in real time, when the deployed model makes uncertain predictions or receives data it has not seen before. |
| [Fairness monitoring](fairness-settings){ target=_blank }   | Monitor deployments to recognize when protected features fail to meet predefined fairness criteria. |
| [Deployment reports](deploy-reports){ target=_blank }       | Generate ongoing monitoring reports, compiling deployment status, charts, and overall model quality information into a shareable report. |

